tg-me.com/ai_python_en/2175
The Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Deviance Information Criterion (DIC) are perhaps the most widely used information criteria (IC) in model building and selection. A fourth, Minimum Description Length (MDL), is closely related to the BIC. In a nutshell, they provide guidance as to which candidate model offers the most "bang for the buck," i.e., the best fit after penalizing for model complexity. Penalizing for complexity matters because, given candidate models of similar predictive or explanatory power, the simplest model is usually the best choice. In line with Occam's razor, overly complex models often perform poorly on data not used in model building.

There are several other criteria, including AIC3, SABIC, and CAIC, and, as far as I am aware, no clear consensus among authorities as to which is "best" overall. Different IC will not necessarily agree on which model should be chosen. Cross-validation, the Predicted Residual Error Sum of Squares (PRESS) statistic (itself a form of cross-validation), and Mallows' Cp are also used in place of IC.

Information criteria are covered in varying levels of detail in most statistics textbooks and are the subject of numerous academic papers. I know of no single go-to source on this topic.
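As a minimal sketch of the fit-versus-complexity trade-off described above, here is how AIC and BIC could be computed by hand for a set of candidate models. The data, the polynomial candidates, and the Gaussian error assumption are all hypothetical illustrations, not taken from the post; the formulas used are the standard AIC = 2k − 2·ln(L̂) and BIC = k·ln(n) − 2·ln(L̂), where k counts the estimated parameters and L̂ is the maximized likelihood.

```python
import numpy as np
from scipy import stats

# Hypothetical data: the true model is quadratic with Gaussian noise.
rng = np.random.default_rng(0)
n = 100
x = np.linspace(-2, 2, n)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(scale=0.5, size=n)

# Compare polynomial fits of increasing degree.
for degree in range(1, 6):
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    sigma2 = np.mean(resid**2)  # MLE of the error variance
    loglik = np.sum(stats.norm.logpdf(resid, scale=np.sqrt(sigma2)))
    k = degree + 2              # polynomial coefficients (incl. intercept) + variance
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    print(f"degree={degree}: AIC={aic:.1f}, BIC={bic:.1f}")
```

With both criteria, lower is better. Because BIC's penalty grows with ln(n) while AIC's is constant in n, BIC tends to favor smaller models in large samples, which is one way the criteria can disagree on the chosen model.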
❇️ @AI_Python_EN